Results 1 - 9 of 9
1.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 641-646, 2023.
Article in Chinese | WPRIM | ID: wpr-974740

ABSTRACT

Objective: To study the performance of artificial intelligence in the pathological diagnosis of periapical cysts and to explore its application in oral pathology. Methods: Pathological images of 87 periapical cysts were selected as the study material, and a neural network with a U-Net structure was constructed. The 87 HE-stained images and their labeled images were divided into a training set (72 images) and a test set (15 images), which were used to train and test the model, respectively. The target-level F1 score, the pixel-level Dice coefficient, and the receiver operating characteristic (ROC) curve were used to evaluate the ability of the U-Net model to recognize periapical cyst epithelium. Results: The F1 score of the U-Net model for recognizing periapical cyst epithelium was 0.75, and the Dice coefficient and the area under the ROC curve were 0.685 and 0.878, respectively. Conclusion: The U-Net model produces good segmentation results when identifying periapical cyst epithelium. It can be preliminarily applied in the pathological diagnosis of periapical cysts and is expected to be gradually adopted in clinical practice after further verification on large samples.
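
As an illustrative aside, the pixel-level Dice coefficient and target-level F1 score used in this study can be computed from binary masks roughly as follows. This is a minimal sketch (not the authors' code), assuming 2D NumPy masks of 0/1 values and an illustrative IoU matching threshold of 0.5 for the target-level score.

```python
# Minimal sketch: pixel-level Dice and target-level F1 for a binary epithelium mask.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """Pixel-level Dice: 2|P∩G| / (|P| + |G|)."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)

def target_level_f1(pred, gt, iou_thresh=0.5):
    """Target-level F1: a predicted connected component counts as a true positive
    if its IoU with some ground-truth component exceeds iou_thresh (assumed 0.5)."""
    pred_lbl, n_pred = ndimage.label(pred)
    gt_lbl, n_gt = ndimage.label(gt)
    tp = 0
    for i in range(1, n_pred + 1):
        p = pred_lbl == i
        best_iou = 0.0
        for j in range(1, n_gt + 1):
            g = gt_lbl == j
            iou = np.logical_and(p, g).sum() / (np.logical_or(p, g).sum() + 1e-8)
            best_iou = max(best_iou, iou)
        if best_iou >= iou_thresh:
            tp += 1
    precision = tp / max(n_pred, 1)
    recall = tp / max(n_gt, 1)
    return 2 * precision * recall / (precision + recall + 1e-8)
```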

2.
Journal of Southern Medical University ; (12): 1224-1232, 2023.
Article in Chinese | WPRIM | ID: wpr-987039

ABSTRACT

Objective: To propose a diffusion tensor field estimation network based on 3D U-Net with a diffusion tensor imaging (DTI) model constraint (3D DTI-Unet), which accurately estimates DTI quantification parameters from a small number of diffusion-weighted (DW) images with a low signal-to-noise ratio. Methods: The input of 3D DTI-Unet was noisy diffusion MRI (dMRI) data consisting of one non-DW image and six DW images with different diffusion-encoding directions. The denoised non-DW image and the diffusion tensor field were predicted by the 3D U-Net. The dMRI data were then reconstructed with the DTI model and compared with the ground-truth dMRI data to optimize the network, ensuring consistency between the dMRI data and the physical model of the diffusion tensor field. We compared 3D DTI-Unet with two DW image denoising algorithms (MP-PCA and GL-HOSVD) to verify the effectiveness of the proposed method. Results: The proposed method outperformed MP-PCA and GL-HOSVD in both quantitative results and visual evaluation of the DW images, the diffusion tensor field, and the DTI quantification parameters. Conclusion: The proposed method can obtain accurate DTI quantification parameters from one non-DW image and six DW images, reducing image acquisition time and improving the reliability of quantitative diagnosis.
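
The DTI model constraint described above amounts to re-synthesizing the DW images from the predicted tensor field via S_i = S0 · exp(-b · g_i^T D g_i) and comparing them with the acquired data. Below is a minimal NumPy sketch of such a consistency term (not the authors' implementation); the b-value and the mean-squared loss form are placeholder assumptions.

```python
# Minimal sketch of a DTI model-consistency term.
import numpy as np

def dw_from_tensor(S0, D, bvecs, bval=1000.0):
    """S0: (X, Y, Z) non-DW image; D: (X, Y, Z, 3, 3) tensor field;
    bvecs: (N, 3) unit gradient directions. Returns (X, Y, Z, N) DW images."""
    # quadratic form g^T D g for every voxel and encoding direction
    q = np.einsum('ni,xyzij,nj->xyzn', bvecs, D, bvecs)
    return S0[..., None] * np.exp(-bval * q)

def model_consistency_loss(S0_pred, D_pred, dw_measured, bvecs, bval=1000.0):
    """Mean squared difference between measured and model-synthesized DW images."""
    dw_model = dw_from_tensor(S0_pred, D_pred, bvecs, bval)
    return np.mean((dw_model - dw_measured) ** 2)
```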


Subject(s)
Diffusion Tensor Imaging , Reproducibility of Results , Diffusion Magnetic Resonance Imaging , Algorithms , Signal-To-Noise Ratio
3.
Journal of Southern Medical University ; (12): 620-630, 2023.
Article in Chinese | WPRIM | ID: wpr-986970

ABSTRACT

Objective: To propose a semi-supervised material quantitative imaging algorithm based on prior-information perception learning (SLMD-Net) to improve the quality and precision of spectral CT imaging. Methods: The algorithm contains a supervised submodule and a self-supervised submodule. In the supervised submodule, the mapping between low and high signal-to-noise ratio (SNR) data was learned with a mean squared error loss on a small labeled dataset. In the self-supervised submodule, an image recovery model was used to construct a loss function incorporating prior information from a large unlabeled dataset of low-SNR basis material images, with the total variation (TV) model characterizing the image prior. The two submodules were combined to form SLMD-Net, and pre-clinical simulation data were used to validate the feasibility and effectiveness of the algorithm. Results: Compared with traditional model-driven quantitative imaging methods (FBP-DI, PWLS-PCG, and E3DTV), data-driven supervised quantitative imaging methods (SUMD-Net and BFCNN), an unsupervised material quantitative imaging method (UNTV-Net), and a semi-supervised cycle-consistent generative adversarial network (Semi-CycleGAN), the proposed SLMD-Net achieved better performance in both visual and quantitative assessments. For quantitative imaging of water and bone, SLMD-Net had the highest PSNR (31.82 and 29.06), the highest FSIM (0.95 and 0.90), and the lowest RMSE (0.03 and 0.02), and achieved significantly higher image quality scores than the other seven material decomposition methods (P < 0.05). Its material quantitative imaging performance was close to that of the supervised network SUMD-Net trained with twice as much labeled data. Conclusions: A small labeled dataset and a large unlabeled low-SNR material image dataset can be fully exploited to suppress noise amplification and artifacts in basis material decomposition for spectral CT and to reduce the dependence on labeled data, which better matches realistic clinical scenarios.
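
A minimal sketch of how the supervised MSE term and a TV prior on unlabeled predictions might be combined into one training loss (not the authors' implementation; the network `net` and the weight `lambda_tv` are placeholders):

```python
# Minimal sketch: supervised MSE on labeled pairs + TV prior on unlabeled predictions.
import torch

def tv_loss(img):
    """Anisotropic total variation of a batch of images (B, C, H, W)."""
    dh = torch.abs(img[:, :, 1:, :] - img[:, :, :-1, :]).mean()
    dw = torch.abs(img[:, :, :, 1:] - img[:, :, :, :-1]).mean()
    return dh + dw

def semi_supervised_loss(net, labeled_lo, labeled_hi, unlabeled_lo, lambda_tv=0.01):
    """Supervised term on labeled low/high-SNR pairs plus a TV prior term
    on the network's predictions for unlabeled low-SNR images."""
    supervised = torch.nn.functional.mse_loss(net(labeled_lo), labeled_hi)
    prior = tv_loss(net(unlabeled_lo))
    return supervised + lambda_tv * prior
```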


Subject(s)
Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Algorithms , Signal-To-Noise Ratio , Perception
4.
Chinese Journal of Medical Instrumentation ; (6): 402-405, 2023.
Article in Chinese | WPRIM | ID: wpr-982253

ABSTRACT

Objective: To improve the accuracy of CT-based pulmonary nodule location detection, reduce missed and false detections, and effectively assist radiologists in diagnosing pulmonary nodules. Methods: A pulmonary nodule location detection method based on multi-scale convolution is proposed. First, image preprocessing is used to remove noise and artifacts from lung CT images. Second, multiple adjacent single-frame CT images are concatenated into multi-frame images, and features are extracted with a U-Net model improved by multi-scale convolution, which strengthens feature extraction for pulmonary nodules of different sizes and shapes and thereby improves feature extraction accuracy. Finally, a point-detection formulation is used to improve the loss function of the U-Net training process, raising the accuracy of nodule location detection. Results: The detection accuracy was 98.02% for pulmonary nodules of 3 mm or larger and 96.94% for nodules smaller than 3 mm. Conclusions: The method effectively improves the detection accuracy of pulmonary nodules in CT image sequences and can better meet the diagnostic needs for pulmonary nodules.
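
A minimal sketch of a multi-scale convolution block of the kind described, with parallel 3x3, 5x5, and 7x7 branches replacing U-Net's plain convolution block (not the authors' network; the channel split and kernel sizes are illustrative assumptions):

```python
# Minimal sketch: multi-scale convolution block for a U-Net encoder/decoder stage.
import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        # parallel branches with different receptive fields for nodules of different sizes
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2),
                          nn.BatchNorm2d(branch_ch),
                          nn.ReLU(inplace=True))
            for k in (3, 5, 7)
        ])
        # 1x1 convolution fuses the concatenated branches back to out_ch channels
        self.fuse = nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: a block taking three adjacent CT slices stacked as input channels
block = MultiScaleConvBlock(in_ch=3, out_ch=64)
features = block(torch.randn(1, 3, 512, 512))
```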


Subject(s)
Humans , Lung Neoplasms/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed , Neural Networks, Computer
5.
Chinese Journal of Radiological Medicine and Protection ; (12): 611-617, 2022.
Article in Chinese | WPRIM | ID: wpr-956833

ABSTRACT

Objective: To establish a three-dimensional (3D) U-Net-based deep learning model and use it to predict the 3D dose distribution in CT-guided cervical cancer brachytherapy. Methods: The brachytherapy plans of 114 cervical cancer cases, each with a prescription dose of 6 Gy, were studied. The cases were divided into training, validation, and test groups of 84, 11, and 19 patients, respectively. The 3D U-Net model was trained for 500 epochs. The dosimetric parameters of the test group were then evaluated, including the mean dose deviation (MDD) and mean absolute dose deviation (MADD) at the voxel level, the Dice similarity coefficient (DSC) of the volumes enclosed by isodose surfaces, the conformity index (CI) of the prescription dose, the D90 and mean dose (Dmean) delivered to the high-risk clinical target volumes (HR-CTVs), and the D1cm³ and D2cm³ delivered to the bladder, rectum, intestines, and colon. Results: The overall MDD and MADD of the 3D dose matrices of the 19 test cases were (-0.01 ± 0.03) Gy and (0.04 ± 0.01) Gy, respectively. The CI of the prescription dose was 0.70 ± 0.04. The DSC for 50%-150% of the prescription dose ranged from 0.89 to 0.94. The mean deviations of D90 and Dmean for the HR-CTVs were 2.22% and -4.30%, respectively. The maximum deviations of D1cm³ and D2cm³ for the bladder, rectum, intestines, and colon were 2.46% and 2.58%, respectively. The 3D U-Net model took 2.5 s on average to predict a patient's dose. Conclusions: A 3D U-Net-based deep learning model for predicting the 3D dose distribution in cervical cancer brachytherapy was established, laying a foundation for automatic planning of cervical cancer brachytherapy.
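
A minimal sketch of the voxel-level MDD/MADD metrics and the isodose-volume DSC described above (not the authors' evaluation code; it assumes the predicted and planned doses are aligned 3D NumPy arrays in Gy on the same grid):

```python
# Minimal sketch: voxel-level dose deviation metrics and isodose-volume Dice.
import numpy as np

def mdd_madd(pred_dose, plan_dose):
    """Mean dose deviation and mean absolute dose deviation (Gy) at the voxel level."""
    diff = pred_dose - plan_dose
    return diff.mean(), np.abs(diff).mean()

def isodose_dsc(pred_dose, plan_dose, prescription=6.0, level=1.0):
    """Dice similarity of the volumes enclosed by a given isodose surface,
    e.g. level=0.5 for 50% and level=1.5 for 150% of the prescription dose."""
    threshold = level * prescription
    p = pred_dose >= threshold
    g = plan_dose >= threshold
    return 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum() + 1e-8)
```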

6.
Journal of Biomedical Engineering ; (6): 1108-1116, 2022.
Article in Chinese | WPRIM | ID: wpr-970648

ABSTRACT

The skin is the largest organ of the human body, and many visceral diseases are reflected directly on the skin, so accurate segmentation of skin lesion images is of great clinical significance. To address the complex colors, blurred boundaries, and uneven scale information of such images, a skin lesion image segmentation method based on dense atrous spatial pyramid pooling (DenseASPP) and an attention mechanism is proposed. The method builds on the U-shaped network (U-Net). First, the encoder is redesigned: ordinary stacked convolutions are replaced with a large number of residual connections, which retain key features even as the network depth grows. Second, channel attention is fused with spatial attention, and residual connections are added so that the network can adaptively learn the channel and spatial features of the images. Finally, a redesigned DenseASPP module is introduced to enlarge the receptive field and capture multi-scale feature information. The proposed algorithm achieves satisfactory results on the public dataset of the International Skin Imaging Collaboration (ISIC 2016): the mean intersection over union (mIoU), sensitivity (SE), precision (PC), accuracy (ACC), and Dice coefficient are 0.9018, 0.9459, 0.9487, 0.9681, and 0.9473, respectively. The experimental results demonstrate that the method improves the segmentation of skin lesion images and is expected to provide auxiliary diagnosis for dermatologists.
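
A minimal sketch of fusing channel attention with spatial attention under a residual connection, in the spirit described above (not the authors' module; the reduction ratio and the 7x7 spatial kernel are common defaults assumed here):

```python
# Minimal sketch: channel attention fused with spatial attention plus a residual connection.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # channel attention: squeeze spatial dimensions, excite per-channel weights
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # spatial attention: 7x7 convolution over pooled channel statistics
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        ca = x * self.channel_fc(x)
        avg_map = ca.mean(dim=1, keepdim=True)
        max_map, _ = ca.max(dim=1, keepdim=True)
        sa = ca * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x + sa  # residual connection

attn = ChannelSpatialAttention(64)
out = attn(torch.randn(1, 64, 128, 128))
```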


Subject(s)
Humans , Skin/diagnostic imaging , Algorithms , Clinical Relevance , Learning , Image Processing, Computer-Assisted
7.
Chinese Journal of Medical Instrumentation ; (6): 377-381, 2022.
Article in Chinese | WPRIM | ID: wpr-939751

ABSTRACT

To better assist doctors in diagnosing dry eye and improve ophthalmologists' ability to assess the condition of the meibomian glands, a meibomian gland image segmentation and enhancement method based on a Mobile-U-Net network is proposed. First, MobileNet is used as the encoder of U-Net for downsampling, and the extracted features are fused with the decoder features to guide image segmentation. Second, the segmented meibomian gland region is enhanced to help doctors judge the condition. Third, a large number of meibomian gland images were collected to train and validate the semantic segmentation network, and a clarity evaluation index was used to verify the enhancement effect. The experimental results show that the similarity coefficient of the proposed method is stable at 92.71%, and the image clarity index is better than that of similar dry eye detection instruments on the market.
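
A minimal sketch of the depthwise-separable convolution block that a MobileNet-style encoder would contribute to such a U-Net (not the authors' network; channel counts and strides are illustrative):

```python
# Minimal sketch: depthwise-separable convolution block for a MobileNet-style U-Net encoder.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depthwise 3x3 convolution (one filter per input channel)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # pointwise 1x1 convolution mixes channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a downsampling encoder stage (stride 2 halves the resolution)
stage = DepthwiseSeparableConv(32, 64, stride=2)
skip_feature = stage(torch.randn(1, 32, 256, 256))  # -> (1, 64, 128, 128)
```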


Subject(s)
Humans , Deep Learning , Diagnostic Imaging , Dry Eye Syndromes , Image Processing, Computer-Assisted , Meibomian Glands/diagnostic imaging
8.
Journal of Biomedical Engineering ; (6): 166-174, 2022.
Article in Chinese | WPRIM | ID: wpr-928211

ABSTRACT

As an important basis for lesion identification and diagnosis, medical image segmentation has become one of the most important and active research areas in biomedicine, and segmentation algorithms based on fully convolutional networks and the U-Net architecture have attracted growing attention from researchers. At present, there are few reports on applying medical image segmentation algorithms to the diagnosis of rectal cancer, and the accuracy of rectal cancer segmentation remains low. This paper proposes an encoder-decoder convolutional network model combined with image cropping and preprocessing. Building on U-Net, the model replaces the traditional convolution block with a residual block, which effectively avoids the vanishing-gradient problem. In addition, an image augmentation (enlargement) method is used to improve the generalization ability of the model. Tests on the dataset provided by the "Teddy Cup" Data Mining Challenge showed that the proposed residual-block-based improved U-Net model, combined with image cropping and preprocessing, can greatly improve the segmentation accuracy for rectal cancer, reaching a Dice coefficient of 0.97 on the validation set.
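
A minimal sketch of a residual block replacing U-Net's plain double-convolution block, with an identity (or 1x1 projection) shortcut so gradients can flow through the skip path (not the paper's code; layer sizes are illustrative):

```python
# Minimal sketch: residual block as a drop-in replacement for U-Net's convolution block.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.shortcut(x))

block = ResidualBlock(64, 128)
y = block(torch.randn(1, 64, 64, 64))  # -> (1, 128, 64, 64)
```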


Subject(s)
Humans , Algorithms , Delayed Emergence from Anesthesia , Image Processing, Computer-Assisted , Rectal Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
9.
Biomedical Engineering Letters ; (4): 375-385, 2019.
Article in English | WPRIM | ID: wpr-785515

ABSTRACT

Unlike medical computed tomography (CT), dental CT often suffers from severe metal artifacts stemming from high-density materials employed for dental prostheses. Despite the many metal artifact reduction (MAR) methods available for medical CT, those methods do not sufficiently reduce metal artifacts in dental CT images because MAR performance is often compromised by the enamel layer of teeth, whose X-ray attenuation coefficient is not so different from that of prosthetic materials. We propose a deep learning-based metal segmentation method on the projection domain to improve MAR performance in dental CT. We adopted a simplified U-net for metal segmentation on the projection domain without using any information from the metal-artifacts-corrupted CT images. After training the network with the projection data of five patients, we segmented the metal objects on the projection data of other patients using the trained network parameters. With the segmentation results, we corrected the projection data by applying region filling inside the segmented region. We fused two CT images, one from the corrected projection data and the other from the original raw projection data, and then we forward-projected the fused CT image to get the fused projection data. To get the final corrected projection data, we replaced the metal regions in the original projection data with the ones in the fused projection data. To evaluate the efficacy of the proposed segmentation method on MAR, we compared the MAR performance of the proposed segmentation method with a conventional MAR method based on metal segmentation on the CT image domain. For the MAR performance evaluation, we considered the three primary MAR performance metrics: the relative error (REL), the sum of square difference (SSD), and the normalized absolute difference (NAD). The proposed segmentation method improved MAR performances by around 5.7% for REL, 6.8% for SSD, and 8.2% for NAD. The proposed metal segmentation method on the projection domain showed better MAR performance than the conventional segmentation on the CT image domain. We expect that the proposed segmentation method can improve the performance of the existing MAR methods that are based on metal segmentation on the CT image domain.
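
A minimal sketch of two of the projection-domain steps described above: filling the segmented metal trace in the sinogram, and finally replacing only the metal regions of the original projections with values from the fused projection data (not the authors' pipeline; linear interpolation along each detector row is an assumed simple region-filling choice):

```python
# Minimal sketch: region filling of the metal trace and metal-region replacement
# on projection (sinogram) data.
import numpy as np

def fill_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted detector bins by linear interpolation along each row.
    sinogram: (n_views, n_bins) array; metal_mask: boolean array of the same shape."""
    filled = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        m = metal_mask[v]
        if m.any() and (~m).any():
            filled[v, m] = np.interp(bins[m], bins[~m], sinogram[v, ~m])
    return filled

def replace_metal_regions(original, fused, metal_mask):
    """Final step: keep the original projections everywhere except the metal trace,
    where the forward-projected fused image supplies the replacement values."""
    return np.where(metal_mask, fused, original)
```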


Subject(s)
Humans , Artifacts , Dental Enamel , Dental Prosthesis , Methods , NAD , Silver Sulfadiazine , Tooth